Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain, under domain shift. Existing methods aim to minimize this domain shift using auxiliary distribution-alignment objectives. In this work, we present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision. Thus, we aim to utilize this implicit alignment, without additional training objectives, to perform adaptation. To this end, we use pseudo-labeled target samples and enforce classifier agreement on the pseudo-labels, a process called Self-supervised Implicit Alignment (SImpAl). We find that SImpAl readily works even under category shift among the source domains. Further, we propose classifier agreement as a cue to determine training convergence, resulting in a simple training algorithm. We provide a thorough evaluation of our approach on five benchmarks, along with detailed insights into each component of our approach.
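The core mechanism in the abstract, pseudo-labeling target samples only when the per-source classifiers agree, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: `agreement_pseudo_labels` and its confidence threshold are hypothetical names, and SImpAl's actual selection criteria are specified in the paper.

```python
import numpy as np

def agreement_pseudo_labels(probs_per_classifier, conf_threshold=0.5):
    """Assign a pseudo-label to each target sample only when every
    source-specific classifier predicts the same class with confidence
    at or above `conf_threshold`.

    probs_per_classifier: array of shape (n_classifiers, n_samples,
    n_classes) holding softmax outputs. Returns an int array of shape
    (n_samples,): the agreed class index, or -1 when the classifiers
    disagree or any of them is under-confident.
    """
    probs = np.asarray(probs_per_classifier)
    preds = probs.argmax(axis=2)              # (n_classifiers, n_samples)
    confs = probs.max(axis=2)                 # top-1 confidence per classifier
    agree = (preds == preds[0]).all(axis=0)   # all classifiers predict alike
    confident = (confs >= conf_threshold).all(axis=0)
    return np.where(agree & confident, preds[0], -1)

# Two source classifiers, three target samples, two classes.
probs = [[[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]],
         [[0.8, 0.2], [0.7, 0.3], [0.55, 0.45]]]
labels = agreement_pseudo_labels(probs)
print(labels.tolist())  # [0, -1, 0]: sample 1 is rejected (classifiers disagree)
```

Only the accepted samples (label != -1) would then join the supervised training pool, which is what lets agreement double as a convergence cue: as the domains align, the accepted fraction grows.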
Review for NeurIPS paper: Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
Weaknesses: 1. Figure 2 is very confusing to me. Figure 2a seems to train an individual classifier for each domain. Figure 2b also seems to train an individual classifier for each domain, but additionally requires agreement across all classifiers for all samples. My question is: what is the difference between Figure 2b and a method that trains only one classifier for all domains? It seems to me that training several classifiers and aligning them with a loss is the same as training a single classifier. Furthermore, while I agree that aligning classifiers can align features across domains as well, previous methods also use a distance loss or adversarial learning to align classifiers or features, which should reach similar performance.
Meta-review for NeurIPS paper: Your Classifier can Secretly Suffice Multi-Source Domain Adaptation
This paper proposes a simple self-training mechanism for multi-source domain adaptation. In a pre-training step, the features extracted by deep networks from the different sources are aligned using pseudo-labels generated by independent softmax classifiers. All the reviewers agree that one of the strong contributions of this paper is the detailed experimental analysis, which provides good insights to the reader on the topic of multi-source domain adaptation. This compensates for the limited novelty of the components used in the model, such as multiple softmax classifiers, which have been employed in related contexts even if not specifically for multi-source domain adaptation. Reviewers have highlighted a few shortcomings that could be improved in the final version of the paper, and the authors have agreed to do so in their response.